44 research outputs found

    Imaging plus X: multimodal models of neurodegenerative disease

    PURPOSE OF REVIEW: This article argues that the time is approaching for data-driven disease modelling to take centre stage in the study and management of neurodegenerative disease. The snowstorm of data now available to the clinician defies qualitative evaluation; the heterogeneity of data types complicates integration through traditional statistical methods; and the large datasets becoming available remain far from the big-data sizes necessary for fully data-driven machine-learning approaches. The recent emergence of data-driven disease progression models provides a balance between imposed knowledge of disease features and patterns learned from data. The resulting models are both predictive of disease progression in individual patients and informative in terms of revealing underlying biological patterns. RECENT FINDINGS: Largely inspired by observational models, data-driven disease progression models have emerged in the last few years as a feasible means for understanding the development of neurodegenerative diseases. These models have revealed insights into frontotemporal dementia, Huntington's disease, multiple sclerosis, Parkinson's disease and other conditions. For example, event-based models have revealed a finer-grained understanding of progression patterns; self-modelling regression and differential equation models have provided data-driven biomarker trajectories; spatiotemporal models have shown that brain shape changes, for example of the hippocampus, can occur before detectable neurodegeneration; and network models have provided some support for prion-like mechanistic hypotheses of disease propagation. The most mature results are in sporadic Alzheimer's disease, in large part because of the availability of the Alzheimer's Disease Neuroimaging Initiative dataset. Results generally support the prevailing amyloid-led hypothetical model of Alzheimer's disease, while revealing finer detail and insight into disease progression. SUMMARY: The emerging field of disease progression modelling provides a natural mechanism to integrate different kinds of information, for example from imaging, serum and cerebrospinal fluid markers and cognitive tests, to obtain new insights into progressive diseases. Such insights include fine-grained longitudinal patterns of neurodegeneration, from early stages, and the heterogeneity of these trajectories over the population. More pragmatically, such models enable finer precision in patient staging and stratification, prediction of progression rates, and earlier and better identification of at-risk individuals. We argue that this will make disease progression modelling invaluable for recruitment and end-points in future clinical trials, potentially ameliorating the high failure rate in trials of, e.g., Alzheimer's disease therapies. We review the state of the art in these techniques and discuss the future steps required to translate the ideas to front-line application.

    Degenerative Adversarial NeuroImage Nets: Generating Images that Mimic Disease Progression

    Simulating images representative of neurodegenerative diseases is important for predicting patient outcomes and for validation of computational models of disease progression. This capability is valuable for secondary prevention clinical trials where outcomes and screening criteria involve neuroimaging. Traditional computational methods are limited by imposing a parametric model for atrophy and are extremely resource-demanding. Recent advances in deep learning have yielded data-driven models for longitudinal studies (e.g., face ageing) that are capable of generating synthetic images in real time. Similar solutions can be used to model trajectories of atrophy in the brain, although new challenges need to be addressed to ensure accurate disease progression modelling. Here we propose Degenerative Adversarial NeuroImage Net (DaniNet), a new deep learning approach that learns to emulate the effect of neurodegeneration on MRI by simulating atrophy as a function of age and disease progression. DaniNet uses an underlying set of Support Vector Regressors (SVRs) trained to capture the patterns of regional intensity changes that accompany disease progression. DaniNet produces whole output images, consisting of 2D MRI slices that are constrained to match regional predictions from the SVRs. DaniNet is also able to maintain the unique brain morphology of individuals. Adversarial training ensures realistic brain images and smooth temporal progression. We train our model using 9652 longitudinal T1-weighted MRI scans extracted from the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. We perform quantitative and qualitative evaluations on a separate test set of 1283 images (also from ADNI), demonstrating the ability of DaniNet to produce accurate and convincing synthetic images that emulate disease progression.
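
    The SVR component described above lends itself to a brief illustration. The following is a minimal sketch, not the authors' code, of fitting one support vector regressor per brain region to model how a regional intensity summary changes with age; the data shapes, kernel choice, and parameter values are illustrative assumptions.

```python
# Illustrative sketch (not the authors' code): one Support Vector Regressor
# per brain region, each modelling how a regional intensity summary changes
# with age. Data shapes, kernel and parameters are assumptions.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n_subjects, n_regions = 200, 4

ages = rng.uniform(55, 90, size=n_subjects)  # years
# Toy regional mean intensities that decline gently with age plus noise.
intensities = 1.0 - 0.004 * ages[:, None] + 0.02 * rng.standard_normal((n_subjects, n_regions))

# Fit one SVR per region, predicting regional intensity from age.
regional_svrs = []
for r in range(n_regions):
    svr = SVR(kernel="rbf", C=1.0, epsilon=0.01)
    svr.fit(ages.reshape(-1, 1), intensities[:, r])
    regional_svrs.append(svr)

# Predicted regional trajectory for a subject followed over time; in DaniNet
# such regional predictions constrain the generated MRI slices.
future_ages = np.arange(60, 86, 5).reshape(-1, 1)
predicted = np.column_stack([svr.predict(future_ages) for svr in regional_svrs])
print(predicted.round(3))
```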

    Quantum Filtering One Bit at a Time

    In this Letter, we consider the purification of a quantum state using the information obtained from a continuous measurement record, where the classical measurement record is digitized to a single bit per measurement after the measurements have been made. Our analysis indicates that efficient and reliable state purification is achievable for one- and two-qubit systems. We also consider quantum feedback control based on the discrete one-bit measurement sequences.
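
    As a rough illustration of purifying a state from a one-bit measurement record, the sketch below runs a Bayesian filter on binarized readouts of a single qubit's basis state, starting from the maximally mixed state. It is a simplified classical analogue with an assumed readout error rate, not the Letter's quantum filtering model.

```python
# Minimal toy sketch (not the Letter's model): Bayesian filtering of a qubit
# that starts maximally mixed, using only a one-bit (binarized) measurement
# record of the z basis. Error rate and record length are assumptions;
# coherences are ignored, so this is a classical analogue of purification.
import numpy as np

rng = np.random.default_rng(1)
error_rate = 0.3          # probability a bit disagrees with the true state
n_bits = 50

true_state = rng.integers(2)                  # hidden basis state, 0 or 1
p0 = 0.5                                      # prior P(state = 0): maximally mixed

for _ in range(n_bits):
    bit = true_state if rng.random() > error_rate else 1 - true_state
    # Likelihood of observing this bit under each hypothesis.
    like0 = (1 - error_rate) if bit == 0 else error_rate
    like1 = (1 - error_rate) if bit == 1 else error_rate
    p0 = like0 * p0 / (like0 * p0 + like1 * (1 - p0))   # Bayes update

purity = p0**2 + (1 - p0)**2                  # Tr(rho^2) for diag(p0, 1-p0)
print(f"P(state=0) = {p0:.3f}, purity = {purity:.3f}, true state = {true_state}")
```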

    A simulation system for biomarker evolution in neurodegenerative disease

    We present a framework for simulating cross-sectional or longitudinal biomarker data sets from neurodegenerative disease cohorts that reflect the temporal evolution of the disease and population diversity. The simulation system provides a mechanism for evaluating the performance of data-driven models of disease progression, which bring together biomarker measurements from large cross-sectional (or short-term longitudinal) cohorts to recover the average population-wide dynamics. We demonstrate the use of the simulation framework in two different ways: first, to evaluate the performance of the Event Based Model (EBM) for recovering biomarker abnormality orderings from cross-sectional datasets; second, to evaluate the performance of a differential equation model (DEM) for recovering biomarker abnormality trajectories from short-term longitudinal datasets. Results highlight several important considerations when applying data-driven models to sporadic disease datasets, as well as key areas for future work. The system reveals several important insights into the behaviour of each model. For example, the EBM is robust to noise on the underlying biomarker trajectory parameters, to under-sampling of the underlying disease time course, and to outliers who follow alternative event sequences. However, the EBM is sensitive to accurate estimation of the distribution of normal and abnormal biomarker measurements. In contrast, we find that the DEM is sensitive to noise on the biomarker trajectory parameters, resulting in an overestimation of the time taken for biomarker trajectories to go from normal to abnormal. This overestimate is approximately twice as long as the actual transition time of the trajectory for the expected noise level in neurodegenerative disease datasets. This simulation framework is equally applicable to a range of other models and longitudinal analysis techniques.
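
    A minimal sketch of the kind of synthetic cohort such a framework produces (not the authors' implementation): sigmoidal biomarker trajectories over a common disease time axis, sampled cross-sectionally with measurement noise. All parameter values below are assumptions chosen for illustration.

```python
# Hedged sketch of a synthetic cross-sectional cohort: sigmoid biomarker
# trajectories over an unobserved disease time axis, with staggered onsets
# and measurement noise. All numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
n_subjects, n_biomarkers = 300, 4

def sigmoid(t, onset, rate):
    """Biomarker abnormality in [0, 1] as a function of disease time t."""
    return 1.0 / (1.0 + np.exp(-rate * (t - onset)))

onsets = np.array([2.0, 5.0, 8.0, 11.0])      # staggered, event-like onsets (years)
rates = np.full(n_biomarkers, 1.0)
noise_sd = 0.05

disease_time = rng.uniform(0, 15, size=n_subjects)   # hidden in real cohorts

data = np.column_stack([
    sigmoid(disease_time, onsets[b], rates[b]) + noise_sd * rng.standard_normal(n_subjects)
    for b in range(n_biomarkers)
])
# A disease progression model (EBM, DEM, ...) would then try to recover the
# onset ordering or trajectory shapes from `data` without seeing disease_time.
print(data.shape, data[:3].round(2))
```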

    Data-Driven Sequence of Changes to Anatomical Brain Connectivity in Sporadic Alzheimer's Disease

    Model-based investigations of transneuronal spreading mechanisms in neurodegenerative diseases relate the pattern of pathology severity to the brain's connectivity matrix, which reveals information about how pathology propagates through the connectivity network. Such network models typically use networks based on functional or structural connectivity in young and healthy individuals, and only end-stage patterns of pathology, thereby ignoring the effects of normal aging and disease progression. Here, we examine the sequence of changes in the elderly brain's anatomical connectivity over the course of a neurodegenerative disease. We do this in a data-driven manner that is not dependent upon clinical disease stage, by using event-based disease progression modeling. Using data from the Alzheimer's Disease Neuroimaging Initiative dataset, we sequence the progressive decline of anatomical connectivity, as quantified by graph-theory metrics, in the Alzheimer's disease brain. Ours is the first single model to describe the nature, the location, and the sequence of changes to anatomical connectivity in the human brain due to Alzheimer's disease. Our experimental results reveal new insights into Alzheimer's disease: degeneration of anatomical connectivity in the brain may be a viable, even early, biomarker and should be considered when studying such neurodegenerative diseases.
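
    As an illustration of the graph-theory metrics mentioned above, the sketch below computes a few standard measures from a toy connectivity matrix with networkx. The random matrix and threshold are assumptions standing in for tractography-derived connectomes.

```python
# Illustrative sketch (not the paper's pipeline): graph-theory metrics from a
# toy structural connectivity matrix using networkx.
import numpy as np
import networkx as nx

rng = np.random.default_rng(3)
n_regions = 20

# Symmetric toy connectivity matrix (e.g., streamline counts), zero diagonal.
weights = rng.random((n_regions, n_regions))
conn = np.triu(weights, k=1)
conn = conn + conn.T

# Threshold to a binary graph before computing metrics (one common choice).
adjacency = (conn > 0.7).astype(int)
G = nx.from_numpy_array(adjacency)

metrics = {
    "density": nx.density(G),
    "global_efficiency": nx.global_efficiency(G),
    "average_clustering": nx.average_clustering(G),
    "mean_degree": np.mean([d for _, d in G.degree()]),
}
# In the study, per-subject metrics like these become the biomarkers whose
# progressive decline is sequenced by the event-based model.
print(metrics)
```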

    Comparison and aggregation of event sequences across ten cohorts to describe the consensus biomarker evolution in Alzheimer's disease

    BACKGROUND: Previous models of Alzheimer's disease (AD) progression were primarily hypothetical or based on data originating from single cohort studies. However, cohort datasets are subject to specific inclusion and exclusion criteria that influence the signals observed in their collected data. Furthermore, each study measures only a subset of AD-relevant variables. To gain a comprehensive understanding of AD progression, the heterogeneity and robustness of estimated progression patterns must be understood, and the complementary information contained in cohort datasets must be leveraged. METHODS: We compared ten event-based models that we fit to ten independent AD cohort datasets. Additionally, we designed and applied a novel rank aggregation algorithm that combines partially overlapping, individual event sequences into a meta-sequence containing the complementary information from each cohort. RESULTS: We observed overall consistency across the ten event-based model sequences (average pairwise Kendall's tau correlation coefficient of 0.69 ± 0.28), despite variance in the positioning of mainly imaging variables. The changes described in the aggregated meta-sequence are broadly consistent with the current understanding of AD progression, starting with cerebrospinal fluid amyloid beta, followed by tauopathy, memory impairment, FDG-PET changes, and ultimately brain deterioration and impairment of visual memory. CONCLUSION: Overall, the event-based models demonstrated similar and robust disease cascades across independent AD cohorts. Aggregation of data-driven results can combine the complementary strengths and information of patient-level datasets. Accordingly, the derived meta-sequence draws a more complete picture of AD pathology than models relying on single cohorts.
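
    The pairwise consistency analysis can be illustrated with a small sketch: computing Kendall's tau between two event orderings over the same biomarkers. The biomarker names and sequences below are invented for illustration, not taken from the study's cohorts.

```python
# Minimal sketch (assumed data): comparing two event-based model orderings of
# the same biomarkers with Kendall's tau, as in the pairwise consistency
# analysis described above.
from scipy.stats import kendalltau

biomarkers = ["CSF_abeta", "CSF_tau", "memory", "FDG_PET", "hippocampal_volume", "visual_memory"]

# Two hypothetical cohort-specific orderings (earliest event first).
sequence_a = ["CSF_abeta", "CSF_tau", "memory", "FDG_PET", "hippocampal_volume", "visual_memory"]
sequence_b = ["CSF_abeta", "memory", "CSF_tau", "FDG_PET", "visual_memory", "hippocampal_volume"]

# Convert each ordering to a rank per biomarker, then correlate the ranks.
rank_a = [sequence_a.index(b) for b in biomarkers]
rank_b = [sequence_b.index(b) for b in biomarkers]

tau, p_value = kendalltau(rank_a, rank_b)
print(f"Kendall's tau = {tau:.2f} (p = {p_value:.3f})")
```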

    Event-Based Modeling with High-Dimensional Imaging Biomarkers for Estimating Spatial Progression of Dementia

    Event-based models (EBMs) are a class of disease progression models that can be used to estimate the temporal ordering of neuropathological changes from cross-sectional data. Current EBMs only handle scalar biomarkers, such as regional volumes, as inputs. However, regional aggregates are a crude summary of the underlying high-resolution images, potentially limiting the accuracy of EBMs. Therefore, we propose a novel method that exploits high-dimensional voxel-wise imaging biomarkers: the n-dimensional discriminative EBM (nDEBM). nDEBM is based on the insight that mixture modeling, which is a key element of conventional EBMs, can be replaced by a more scalable semi-supervised support vector machine (SVM) approach. This SVM is used to estimate the degree of abnormality of each region, which is then used to obtain subject-specific disease progression patterns. These patterns are in turn used to estimate the mean ordering by fitting a generalized Mallows model. In order to validate the biomarker ordering obtained using nDEBM, we also present a framework for Simulation of Imaging Biomarkers' Temporal Evolution (SImBioTE) that mimics neurodegeneration in brain regions. SImBioTE trains variational auto-encoders (VAEs) in different brain regions independently to simulate images at varying stages of disease progression. We also validate nDEBM clinically using data from the Alzheimer's Disease Neuroimaging Initiative (ADNI). In both experiments, nDEBM using high-dimensional features gave better performance than state-of-the-art EBM methods using regional volume biomarkers. This suggests that nDEBM is a promising approach for disease progression modeling.
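
    A hedged sketch of the core idea, not the nDEBM implementation: an SVM trained on high-dimensional regional features from controls versus patients, whose probabilistic output is read as the degree of abnormality of that region for each subject. The feature dimensions, labels, and effect size below are toy assumptions.

```python
# Toy sketch: an SVM on high-dimensional regional features whose probability
# output serves as a per-subject abnormality score for that region.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(4)
n_controls, n_patients, n_voxels = 100, 100, 50

controls = rng.standard_normal((n_controls, n_voxels))
patients = rng.standard_normal((n_patients, n_voxels)) + 0.5   # shifted: "atrophied"

X = np.vstack([controls, patients])
y = np.concatenate([np.zeros(n_controls), np.ones(n_patients)])

svm = SVC(kernel="linear", probability=True).fit(X, y)

# Per-subject probability that this region looks abnormal; in nDEBM such
# values (one per region) feed the generalized Mallows model that estimates
# the mean event ordering.
abnormality = svm.predict_proba(X)[:, 1]
print(abnormality[:5].round(2), abnormality[-5:].round(2))
```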

    Opportunities and barriers for adoption of a decision-support tool for Alzheimer's Disease

    Clinical decision-support tools (DSTs) represent a valuable resource in healthcare. However, a lack of Human Factors considerations and of early design research has often limited their successful adoption. To complement previous technically focused work, we studied adoption opportunities for a future DST built on a predictive model of Alzheimer's Disease (AD) progression. Our aim is twofold: to explore adoption opportunities for DSTs in AD clinical care, and to test a novel combination of methods to support this process. We focused on understanding current clinical needs and practices, and the potential for such a tool to be integrated into the setting, prior to its development. Our user-centred approach was based on field observations and semi-structured interviews, analysed through workflow analysis, user profiles, and a design-reality gap model. The first two are common practice, whilst the latter provided added value in highlighting specific adoption needs. We identified the likely early adopters of the tool as being both psychiatrists and neurologists based in research-oriented clinical settings. We defined ten key requirements for the translation and adoption of DSTs for AD, covering IT, user, and contextual factors. Future work can build on these requirements to give such tools a greater chance of being adopted in the clinical setting.

    pySuStaIn: A Python implementation of the Subtype and Stage Inference algorithm

    Progressive disorders are highly heterogeneous. Symptom-based clinical classification of these disorders may not reflect the underlying pathobiology. Data-driven subtyping and staging of patients has the potential to disentangle the complex spatiotemporal patterns of disease progression. Tools that enable this are in high demand from the clinical and treatment-development communities. Here we describe the pySuStaIn software package, a Python-based implementation of the Subtype and Stage Inference (SuStaIn) algorithm. SuStaIn unravels the complexity of heterogeneous diseases by inferring multiple disease progression patterns (subtypes) and individual severity (stages) from cross-sectional data. The primary aims of pySuStaIn are to enable widespread application and translation of SuStaIn via an accessible Python package that supports simple extension and generalization to novel modeling situations within a single, consistent architecture.
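
    The sketch below is not the pySuStaIn API; it is only a highly simplified illustration of what staging means in this setting, assigning each subject a stage by counting how many z-scored biomarkers have crossed an assumed abnormality threshold. SuStaIn itself additionally infers subtypes and sequence uncertainty.

```python
# Highly simplified illustration of "staging" (not the pySuStaIn API):
# count how many z-scored biomarkers exceed an assumed abnormality threshold.
import numpy as np

rng = np.random.default_rng(5)
n_subjects, n_biomarkers = 10, 5
z_threshold = 2.0     # assumed cut-off, in control standard deviations

# Toy z-scores relative to a control population (positive = more abnormal).
z_scores = rng.normal(loc=1.0, scale=1.5, size=(n_subjects, n_biomarkers))

# Crude stage: number of biomarkers past threshold (0 = unaffected, 5 = late).
stages = (z_scores > z_threshold).sum(axis=1)
for subject, stage in enumerate(stages):
    print(f"subject {subject}: stage {stage} of {n_biomarkers}")
```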

    Degenerative adversarial neuroimage nets for brain scan simulations: Application in ageing and dementia

    Accurate and realistic simulation of high-dimensional medical images has become an important research area relevant to many AI-enabled healthcare applications. However, current state-of-the-art approaches lack the ability to produce satisfactory high-resolution and accurate subject-specific images. In this work, we present a deep learning framework, namely the 4D-Degenerative Adversarial NeuroImage Net (4D-DANI-Net), to generate high-resolution, longitudinal MRI scans that mimic subject-specific neurodegeneration in ageing and dementia. 4D-DANI-Net is a modular framework based on adversarial training and a set of novel spatiotemporal, biologically-informed constraints. To ensure efficient training and overcome memory limitations affecting such high-dimensional problems, we rely on three key technological advances: i) a new 3D training consistency mechanism called Profile Weight Functions (PWFs), ii) a 3D super-resolution module, and iii) a transfer learning strategy to fine-tune the system for a given individual. To evaluate our approach, we trained the framework on 9852 T1-weighted MRI scans from 876 participants in the Alzheimer's Disease Neuroimaging Initiative dataset and held out a separate test set of 1283 MRI scans from 170 participants for quantitative and qualitative assessment of the personalised time series of synthetic images. We performed three evaluations: i) image quality assessment; ii) quantification of the accuracy of regional brain volumes relative to benchmark models; and iii) assessment of medical experts' visual perception of the synthetic images. Overall, both quantitative and qualitative results show that 4D-DANI-Net produces realistic, low-artefact, personalised time series of synthetic T1 MRI scans that outperform benchmark models.
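
    Of the three advances listed, the transfer-learning step is the easiest to illustrate generically. The sketch below is not the 4D-DANI-Net code; it freezes most of a small pretrained network and fine-tunes only its last layer on placeholder subject-specific data, which is the general shape of personalising a pretrained generator.

```python
# Generic sketch of the transfer-learning idea only (not 4D-DANI-Net): freeze
# most of a pretrained network and fine-tune its final layer per subject.
# The tiny model and random tensors are placeholders for the real generator
# and MRI data.
import torch
import torch.nn as nn

torch.manual_seed(0)

generator = nn.Sequential(
    nn.Linear(16, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 16),               # stands in for the image-producing head
)

# Freeze all layers, then unfreeze only the final layer for personalisation.
for param in generator.parameters():
    param.requires_grad = False
for param in generator[-1].parameters():
    param.requires_grad = True

optimizer = torch.optim.Adam(
    (p for p in generator.parameters() if p.requires_grad), lr=1e-3)
loss_fn = nn.MSELoss()

subject_x = torch.randn(8, 16)       # placeholder subject-specific inputs
subject_y = torch.randn(8, 16)       # placeholder subject-specific targets

for _ in range(5):                   # a few fine-tuning steps
    optimizer.zero_grad()
    loss = loss_fn(generator(subject_x), subject_y)
    loss.backward()
    optimizer.step()
print(f"fine-tuning loss: {loss.item():.4f}")
```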